# DPO alignment optimization
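
Direct Preference Optimization (DPO) is the alignment method this section is named after. As background (the standard formulation from Rafailov et al., 2023, included for context rather than quoted from this listing), DPO fine-tunes a policy directly on preference pairs, with no separate reward model, by minimizing:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}})
= -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}
\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)}
- \beta \log \frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}\right)\right]
$$

where $y_w$ and $y_l$ are the preferred and rejected responses to prompt $x$, $\pi_{\mathrm{ref}}$ is the frozen reference (SFT) model, and $\beta$ controls how far the policy may drift from it.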
## TinyLlama 1.1B Chat v1.0
License: Apache-2.0 · Tags: Large Language Model, Transformers, English
TinyLlama is a lightweight 1.1B-parameter Llama model pre-trained on 3 trillion tokens, then fine-tuned for conversation and alignment-optimized, making it suitable for resource-constrained scenarios.

Publisher: TinyLlama · Downloads: 1.4M · Likes: 1,237
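
A minimal sketch of how a chat model like this could be DPO-aligned with the TRL library. This is illustrative, not the recipe actually used for v1.0: the starting checkpoint is a stand-in (in practice DPO starts from the SFT-only predecessor), the `trl-lib/ultrafeedback_binarized` dataset and all hyperparameters are assumptions, and the argument names follow recent trl releases.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Stand-in checkpoint; DPO would normally start from an SFT-only model.
model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Preference data with "prompt"/"chosen"/"rejected" columns; this dataset
# choice is an assumption, not documented in the listing above.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(
    output_dir="tinyllama-dpo",
    beta=0.1,                      # KL-penalty strength toward the reference model
    learning_rate=5e-7,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,    # recent trl; older releases used `tokenizer=`
)
trainer.train()
```

With `ref_model` left unset, `DPOTrainer` clones the initial policy as the frozen reference, matching the $\pi_{\mathrm{ref}}$ in the objective above.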
## TinyLlama 1.1B Chat v0.6
License: Apache-2.0 · Tags: Large Language Model, English
TinyLlama is a 1.1-billion-parameter Llama model pre-trained on 3 trillion tokens, suitable for scenarios with limited compute and memory.
Publisher: TinyLlama · Downloads: 11.60k · Likes: 98
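
Both cards emphasize limited-resource use. Below is a minimal inference sketch with Transformers that loads the model in half precision to fit small GPUs; the Hub ID is assumed from the card name, and `device_map="auto"` requires the accelerate package.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hub ID assumed from the listing above; verify before use.
model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # half precision to reduce memory footprint
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Explain what DPO alignment does in one sentence."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```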